Learning Separable Filters

Authors

Abstract


Related Articles

Learning Separable Filters with Shared Parts

Learned image features can provide great accuracy in many Computer Vision tasks. However, when the convolution filters used to learn image features are numerous and not separable, feature extraction becomes computationally demanding and impractical to use in real-world situations. In this thesis work, a method for learning a small number of separable filters to approximate an arbitrary non-sepa...
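The computational saving this abstract alludes to can be sketched in plain NumPy: convolving with a separable 2D filter (an outer product of two 1D filters) is equivalent to two sequential 1D convolutions, cutting the per-pixel work from O(k²) to O(2k) multiplications. The image and filter sizes below are arbitrary illustrations, not taken from the thesis.

```python
import numpy as np

def conv2d_valid(img, ker):
    """Reference direct 2D 'valid' convolution (kernel flipped)."""
    kh, kw = ker.shape
    kf = ker[::-1, ::-1]
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kf)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
v = rng.standard_normal(5)          # vertical 1D filter
h = rng.standard_normal(5)          # horizontal 1D filter
sep_filter = np.outer(v, h)         # separable 2D filter by construction

# Direct route: 25 multiplications per output pixel.
full = conv2d_valid(img, sep_filter)

# Separable route: 5 + 5 multiplications per output pixel.
cols = np.apply_along_axis(np.convolve, 0, img, v, mode="valid")
seq = np.apply_along_axis(np.convolve, 1, cols, h, mode="valid")

assert np.allclose(full, seq)  # identical results, far less work
```

The equivalence holds exactly (up to floating point) because the 2D kernel factors into a column filter and a row filter.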


Learning of Separable Filters by Stacked Fisher Convolutional Autoencoders

Learning convolutional filters in deep neural networks has proven highly effective at providing sparse representations for image recognition. The computational cost of these networks can be reduced by focusing on separable filters, which lowers the number of learnable parameters. Autoencoders are a family of powerful deep networks for building scalable generative models for automatic featu...


Supplemental Material for the paper "Learning Separable Filters"

Figure 1 illustrates some filter banks learned on the DRIVE dataset. In particular, it shows how an example learned filter bank can be replaced by its rank-1 approximation obtained using the SVD decomposition (SEP-SVD). Figure 2 shows examples of 3D filter banks learned on the OPF dataset. In Figure 3 the central slice of the filter banks are given for a better comparison. We report the detaile...
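The rank-1 replacement mentioned here (SEP-SVD) can be sketched in a few lines of NumPy: the leading singular triplet of a 2D filter gives its best rank-1, hence separable, approximation in Frobenius norm. The random filter below merely stands in for a learned one; it is not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((9, 9))  # stands in for a learned, non-separable filter

# Keep only the leading singular triplet: the best rank-1 approximation.
U, s, Vt = np.linalg.svd(f)
vert = U[:, 0] * np.sqrt(s[0])    # vertical 1D factor
horiz = Vt[0, :] * np.sqrt(s[0])  # horizontal 1D factor
f_sep = np.outer(vert, horiz)     # separable by construction (rank 1)

# Relative approximation error; small when the filter is nearly separable.
rel_err = np.linalg.norm(f - f_sep) / np.linalg.norm(f)
```

By the Eckart–Young theorem, no other rank-1 matrix approximates `f` better in Frobenius norm.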


Recursive Separable Schemes for Nonlinear Diffusion Filters

Poor efficiency is a typical problem of nonlinear diffusion filtering, when the simple and popular explicit (Euler-forward) scheme is used: for stability reasons very small time step sizes are necessary. In order to overcome this shortcoming, a novel type of semi-implicit schemes is studied, so-called additive operator splitting (AOS) methods. They share the advantages of explicit and (semi-)implici...
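The stability gap the abstract refers to can be illustrated for plain 1D linear diffusion (a hedged sketch only; the nonlinear diffusivity and the AOS splitting of the paper are not reproduced here). The explicit Euler step is stable only for step sizes up to 0.5 on a unit grid, while a semi-implicit step stays bounded for any step size.

```python
import numpy as np

n = 50
u0 = np.zeros(n)
u0[n // 2] = 1.0  # impulse initial condition

# Discrete 1D Laplacian with reflecting (Neumann) boundaries.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = -1.0

tau = 5.0  # far above the explicit stability limit of 0.5
u_exp = u0.copy()
u_semi = u0.copy()
for _ in range(21):
    # Explicit Euler: u <- u + tau * A u  (diverges for tau > 0.5)
    u_exp = u_exp + tau * (A @ u_exp)
    # Semi-implicit: solve (I - tau * A) u_new = u  (unconditionally stable)
    u_semi = np.linalg.solve(np.eye(n) - tau * A, u_semi)
```

The semi-implicit iterate remains a convex combination of the initial values (so it never exceeds 1 and conserves total mass), whereas the explicit iterate blows up.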


Learning Linearly Separable Languages

For a finite alphabet A, we define a class of embeddings of A∗ into an infinite-dimensional feature space X and show that its finitely supported hyperplanes define regular languages. This suggests a general strategy for learning regular languages from positive and negative examples. We apply this strategy to the piecewise testable languages, presenting an embedding under which these are precise...
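The piecewise testable case mentioned here admits a concrete feature map: embed a string by the set of its scattered subsequences up to a fixed length, so that membership tests become single-coordinate (half-space) checks in the feature space. The helper below is a hypothetical illustration of that idea, not the paper's exact embedding.

```python
from itertools import combinations

def subseq_features(w, k=2):
    """Set of scattered subsequences of w with length between 1 and k."""
    feats = set()
    for r in range(1, k + 1):
        for idx in combinations(range(len(w)), r):
            feats.add("".join(w[i] for i in idx))
    return feats

# The language "contains 'ab' as a scattered subsequence" reduces to a
# single-coordinate test in this feature space.
def has_ab(w):
    return "ab" in subseq_features(w)
```

Boolean combinations of such coordinate tests characterize exactly the piecewise testable languages.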



Journal

Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence

Year: 2015

ISSN: 0162-8828, 2160-9292

DOI: 10.1109/tpami.2014.2343229